
    Design and development of classification algorithms based on fuzzy rules induced from data

    The automatic classification of objects into a predefined set of classes plays a key role in the field of intelligent systems. The development of classification algorithms has attracted considerable interest in the artificial intelligence community since its early days, in the 1950s and 1960s. The advent of fuzzy logic, developed by Zadeh from 1965 onwards, opened new avenues for tackling the classification problem. In particular, classifiers based on fuzzy if-then rules are nowadays an interesting alternative to more traditional methods, such as statistical and neural classifiers, thanks to their human interpretability. In this thesis we implemented the most common versions of fuzzy classifiers in the Matlab environment, within the PRTools library. PRTools is a Matlab library (free for researchers, licensed for industry) developed at Delft University of Technology in the Netherlands. Over time it has become the reference library for Pattern Recognition in Matlab. Unfortunately, it provides no fuzzy classifiers, and this thesis aims to fill that gap. The advantage of having fuzzy classifiers inside PRTools is that other users of the toolbox will find it entirely natural to use the new classifier, named frbc, alongside the standard ones already available. This makes it easy to compare the performance of different classifiers in order to select the best one. Moreover, and no less important, all PRTools functions that apply to standard classifiers will also be usable with the fuzzy classifier frbc.
For all these reasons, we believe this work is of interest both to the artificial and computational intelligence research communities and to end users
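The if-then fuzzy rule inference underlying classifiers such as frbc can be sketched in a few lines; the triangular membership functions, the two-rule base, and the winner-takes-all inference below are illustrative assumptions, not the thesis's actual Matlab/PRTools implementation.

```python
def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical linguistic terms shared by both input features.
TERMS = {
    "low":    (0.0, 2.0, 5.0),
    "medium": (2.0, 5.0, 8.0),
    "high":   (5.0, 8.0, 10.0),
}

# If-then rules: one antecedent term per feature, plus a consequent class.
RULES = [
    (("low", "low"), "class_A"),
    (("high", "medium"), "class_B"),
]

def classify(sample, rules=RULES, terms=TERMS):
    """Winner-takes-all inference: the rule with the strongest activation
    (minimum of its antecedent memberships) assigns the class."""
    best_class, best_strength = None, -1.0
    for antecedent, cls in rules:
        strength = min(tri(x, *terms[t]) for x, t in zip(sample, antecedent))
        if strength > best_strength:
            best_class, best_strength = cls, strength
    return best_class
```

A real fuzzy rule-based classifier induces the terms and rules from data; here they are fixed by hand to keep the sketch self-contained.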

    Detection of traffic congestion and incidents from GPS trace analysis

    This paper presents an expert system for detecting traffic congestion and incidents from real-time GPS data collected from GPS trackers or drivers’ smartphones. First, GPS traces are pre-processed and placed on the road map. Then, the system assigns to each road segment of the map a traffic state based on the speeds of the vehicles. Finally, it sends the users traffic alerts based on a spatiotemporal analysis of the classified segments. Each traffic alert contains the affected area, a traffic state (e.g., incident, slowed traffic, blocked traffic), and the estimated velocity of vehicles in the area. The proposed system is intended to be a valuable support tool in traffic management for municipalities and citizens. The information produced by the system can be successfully employed to adopt actions for improving city mobility, e.g., regulating vehicular traffic, or can be exploited by the users, who may spontaneously decide to modify their path in order to avoid the traffic jam. The processing performed by the expert system is independent of the context (urban or non-urban) and may be directly employed in several city road networks with almost no change to the system parameters, and without the need for a learning process or historical data. The experimental analysis was performed using a combination of simulated GPS data and real GPS data from the city of Pisa. The results on incidents show a detection rate of 91.6%, and an average detection time lower than 7 min. Regarding congestion, we show how the system is able to recognize different levels of congestion depending on different road use
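The per-segment state assignment described above can be sketched as a simple speed-based labeling step; the free-flow reference speed, the thresholds, and the state names are illustrative placeholders, not the system's actual parameters.

```python
def segment_state(speeds_kmh, free_flow_kmh=50.0):
    """Assign a traffic state to a road segment from the speeds of the
    vehicles observed on it (thresholds are illustrative)."""
    if not speeds_kmh:
        return "unknown"
    ratio = (sum(speeds_kmh) / len(speeds_kmh)) / free_flow_kmh
    if ratio < 0.1:
        return "blocked"
    if ratio < 0.5:
        return "slowed"
    return "free_flow"
```

A spatiotemporal analysis would then group adjacent segments sharing the same state into a single traffic alert covering the affected area.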

    An ensemble of learning machines for quantitative analysis of bronze alloys

    We deal with the determination of the composition of bronze alloys measured through Laser-Induced Breakdown Spectroscopy (LIBS) analysis. The relation between LIBS spectra and bronze alloy composition, represented by means of the concentrations of the constituting elements, is modeled by adopting an ensemble of learning machines, fed with different inputs. Then, the combiner computes the final response. The results obtained on the test set show that the ensemble model manages to determine the composition of alloy samples with a mean squared error of about 6.53 × 10^-2
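The ensemble-and-combiner scheme can be sketched as follows; the toy learners and the plain-averaging combiner are assumptions made for illustration, not the trained machines or the combiner used in the paper.

```python
def ensemble_predict(models, inputs, combine=None):
    """Each learner sees its own input view; the combiner fuses the
    per-model estimates into the final concentration estimate."""
    preds = [model(x) for model, x in zip(models, inputs)]
    if combine is None:  # default combiner: plain averaging
        return sum(preds) / len(preds)
    return combine(preds)

# Toy learners standing in for the trained machines.
models = [lambda x: 0.9 * x, lambda x: 1.1 * x, lambda x: x + 0.3]
estimate = ensemble_predict(models, [1.0, 1.0, 1.0])
```

Passing a different `combine` callable (e.g., a median or a trained meta-model) changes the fusion rule without touching the base learners.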

    Computational Intelligence for classification and forecasting of solar photovoltaic energy production and energy consumption in buildings

    This thesis presents a few novel applications of Computational Intelligence techniques in the field of energy-related problems. More in detail, we refer to the assessment of the energy produced by a solar photovoltaic installation and to the evaluation of buildings’ energy consumption. In fact, recently, thanks also to the growing evolution of technologies, the energy sector has drawn the attention of the research community in proposing useful tools to deal with issues of energy efficiency in buildings and with solar energy production management. Thus, we address two kinds of problems. The first problem is related to the efficient management of solar photovoltaic energy installations, e.g., for efficiently monitoring the performance, finding faults, or planning the energy distribution in the electrical grid. This problem was tackled with two different approaches: a forecasting approach and a fuzzy classification approach for energy production estimation, starting from some knowledge about environmental variables. The forecasting system developed is able to reproduce the instantaneous curve of daily energy produced by the solar panels of the installation, with a forecasting horizon of one day. It combines neural networks and time series analysis models. The fuzzy classification system, instead, extracts linguistic knowledge about the amount of energy produced by the installation, exploiting an optimal fuzzy rule base and genetic algorithms. The developed model is the result of a novel hierarchical methodology for building fuzzy systems, which may be applied in several areas. The second problem is related to energy efficiency in buildings, for cost reduction and load scheduling purposes, and was tackled by proposing a forecasting system of energy consumption in office buildings.
The proposed system exploits a neural network to estimate the energy consumption due to lighting over a time interval of a few hours, starting from the available natural daylight

    UHF-RFID smart gate: Tag action classifier by artificial neural networks

    The application of Artificial Neural Networks (ANNs) to discriminate tag actions in a UHF-RFID gate is presented in this paper. By exploiting Received Signal Strength Indicator values acquired in a real experimental scenario, a multi-layer perceptron neural network is trained to distinguish among tags entering, exiting, or passing by the RFID gate. A 99% accuracy can be obtained in tag classification by employing only one reader antenna, independently of tag orientation and type

    Prognostic significance of primary-tumor extension, stage and grade of nuclear differentiation in patients with renal cell carcinoma

    Surgery remains the preferred therapy for renal cell carcinoma. The various adjunctive or complementary therapies currently yield disappointing results. Identifying reliable prognostic factors could help in selecting patients most likely to benefit from postoperative adjuvant therapies. We reviewed the surgical records of 78 patients who had undergone radical nephrectomy with lymphadenectomy for renal cell carcinoma, matched for type of operation and histology. According to staging (TNM), 5.1% of the patients were classified as stage I, 51.3% as stage II, 29.5% as stage III and 14.5% as stage IV. Of the 78 patients, 40 were T2N0 and 21 were T3aN0. Tumor grading showed that 39.7% of the patients had well-differentiated tumors (G1), 41.1% moderately-differentiated tumors (G2), and 19.2% poorly-differentiated tumors (G3). Overall actuarial survival at 5 and 10 years was 100% for stage I; 91.3% at 5 years and 83.1% at 10 years for stage II; 45.5% and 34.1% for stage III; and 29.1% and nil for stage IV (stage II vs stage III: p = 0.0001). Patients with tumors confined to the kidney (pT2N0) had better 5- and 10-year survival rates than patients with tumors infiltrating the perirenal fat (pT3aN0) (p = 0.000006). Survival differed according to nuclear grading (G1 vs G3: p = 0.000005; G2 vs G3: p = 0.0009). In conclusion, our review identified tumor stage, primary-tumor extension, and the grade of nuclear differentiation as reliable prognostic factors in patients with renal cell carcinoma

    Path Clustering Based on a Novel Dissimilarity Function for Ride-Sharing Recommenders

    Ride-sharing represents one of the possible answers to the traffic congestion problem in today's cities. In this scenario, recommenders determine the similarity among different paths in order to suggest possible ride shares. In this paper, we propose a novel dissimilarity function between pairs of paths based on the construction of a shared path, which visits all points of the two paths while respecting the order of the sequences within each of them. The shared path is computed as the shortest path on a directed acyclic graph with precedence constraints between the points of interest defined in the single paths. The dissimilarity function evaluates how much a user has to extend his/her path to cover the overall shared path. After computing the dissimilarity between all pairs of paths, we execute a fuzzy relational clustering algorithm to determine groups of similar paths. Within these groups, the recommender chooses users who can be invited to share rides. We show and discuss the results obtained by our approach on 45 paths
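The shared-path construction can be sketched as a dynamic program that merges the two point sequences while preserving the point order within each path; the dissimilarity formula at the end is one plausible symmetric reading of "how much a user has to extend his/her path", not necessarily the paper's exact definition.

```python
from math import dist, inf

def path_length(P):
    """Total Euclidean length of a polyline given as a list of points."""
    return sum(dist(P[i], P[i + 1]) for i in range(len(P) - 1))

def shared_path_length(A, B):
    """Length of the shortest path visiting all points of A and B while
    preserving the point order within each path (merge via DP)."""
    n, m = len(A), len(B)
    # dp[i][j][k]: best length after taking i points of A and j of B,
    # currently standing on A[i-1] (k = 0) or B[j-1] (k = 1).
    dp = [[[inf, inf] for _ in range(m + 1)] for _ in range(n + 1)]
    if n:
        dp[1][0][0] = 0.0  # start at A's first point
    if m:
        dp[0][1][1] = 0.0  # or start at B's first point
    for i in range(n + 1):
        for j in range(m + 1):
            for k in (0, 1):
                cur = dp[i][j][k]
                if cur == inf:
                    continue
                here = A[i - 1] if k == 0 else B[j - 1]
                if i < n and cur + dist(here, A[i]) < dp[i + 1][j][0]:
                    dp[i + 1][j][0] = cur + dist(here, A[i])
                if j < m and cur + dist(here, B[j]) < dp[i][j + 1][1]:
                    dp[i][j + 1][1] = cur + dist(here, B[j])
    return min(dp[n][m])

def dissimilarity(A, B):
    """Illustrative symmetric form: average relative extension each user
    incurs to cover the shared path instead of his/her own."""
    s = shared_path_length(A, B)
    la, lb = path_length(A), path_length(B)
    return 0.5 * ((s - la) / la + (s - lb) / lb)
```

For two parallel two-point paths one unit apart, the shortest shared path threads through all four points, so each user roughly doubles his/her own route and the dissimilarity comes out high.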

    Virtual unfolding of light sheet fluorescence microscopy dataset for quantitative analysis of the mouse intestine

    Light sheet fluorescence microscopy has proven to be a powerful tool to image fixed and chemically cleared samples, providing in-depth, high-resolution reconstructions of intact mouse organs. We applied light sheet microscopy to image the mouse intestine. We found that large portions of the sample can be readily visualized, assessing the organ status and highlighting the presence of regions with impaired morphology. Yet, three-dimensional (3-D) sectioning of the intestine leads to a large dataset that imposes unnecessary storage and processing overhead. We developed a routine that extracts the relevant information from a large image stack and provides quantitative analysis of the intestine morphology. This result was achieved by a three-step procedure consisting of: (1) virtually unfolding the 3-D reconstruction of the intestine; (2) observing it layer by layer; and (3) identifying distinct villi and statistically analyzing multiple samples belonging to different intestinal regions. Although the procedure was developed for the murine intestine, most of the underlying concepts have a general applicability

    An artificial neural network approach to laser-induced breakdown spectroscopy quantitative analysis

    The usual approach to laser-induced breakdown spectroscopy (LIBS) quantitative analysis is based on the use of calibration curves, suitably built using appropriate reference standards. More recently, statistical methods relying on the principles of artificial neural networks (ANN) are increasingly used. However, ANN analysis is often used as a 'black box' system and the peculiarities of the LIBS spectra are not fully exploited. An a priori exploration of the raw data in the LIBS spectra, in which a first neural network learns which areas of the spectrum are significant for a second network in charge of the calibration, can bring to light important information that is initially unknown, although already contained within the spectrum. This communication will demonstrate that an approach based on neural networks specially tailored for dealing with LIBS spectra provides a viable, fast and robust method for LIBS quantitative analysis. This would allow the use of a relatively limited number of reference samples for the training of the network, compared with current approaches, and provide a fully automatable approach for the analysis of a large number of samples
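The two-stage idea, a first network that locates the significant areas of the spectrum followed by a second network that performs the calibration, can be illustrated with simple statistical stand-ins: variance-based channel ranking in place of the first network and a least-squares fit in place of the second. All names below are hypothetical and the stand-ins are deliberately cruder than the neural networks a real pipeline would use.

```python
def select_channels(spectra, k):
    """Rank spectral channels by their variance across samples and keep
    the top k: a crude stand-in for the first network, which learns
    which areas of the spectrum carry information."""
    def var(c):
        vals = [s[c] for s in spectra]
        mean = sum(vals) / len(vals)
        return sum((v - mean) ** 2 for v in vals) / len(vals)
    return sorted(range(len(spectra[0])), key=var, reverse=True)[:k]

def fit_line(xs, ys):
    """Least-squares line ys ≈ a*xs + b: a stand-in for the second,
    calibration network mapping selected intensities to concentration."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx
```

Feeding the calibration stage only the selected channels is what lets the approach get by with fewer reference samples than a network trained on the full spectrum.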

    A hybrid calibration-free/artificial neural networks approach to the quantitative analysis of LIBS spectra

    A 'hybrid' method is proposed for the quantitative analysis of materials by LIBS, combining the precision of the calibration-free LIBS (CF-LIBS) algorithm with the speed of artificial neural networks. The method allows the precise determination of the samples' composition even in the presence of relatively large laser fluctuations and matrix effects. To show the strength and robustness of this approach, a number of synthetic LIBS spectra of Cu-Ni binary alloys with different compositions were computer-simulated for different plasma temperatures, electron number densities and ablated masses. The CF-LIBS/ANN approach proposed here proved capable, after appropriate training, of 'learning' the basic physical relations between the experimentally measured line intensities and the plasma parameters. As a result, the composition of the sample can be correctly determined, as in CF-LIBS measurements, but in a much shorter time
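The basic physical relation that CF-LIBS inverts, and that the hybrid approach expects the network to 'learn', links each measured line intensity to the species concentration and the plasma parameters. The formulation below is the standard CF-LIBS one; the symbol names follow common usage rather than this abstract:

```latex
I^{\lambda}_{ki} = F \, C_s \, \frac{A_{ki}\, g_k}{U_s(T)} \, e^{-E_k / (k_B T)}
```

Here F is an experimental factor, C_s the concentration of the emitting species, A_ki the transition probability, g_k and E_k the degeneracy and energy of the upper level, U_s(T) the species partition function, and T the plasma temperature.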